Remember that when reading Xtion Pro Live depth and color data, the usual approach is to call the OpenNI classes directly. That direct method is more troublesome: you have to write a dedicated reader program that calls OpenNI's functions. OpenCV 2 now integrates OpenNI, so depth and color data can be read directly with VideoCapture, although by comparison some features may be less complete. So how do you read OpenNI data directly with OpenCV? The following steps are m…
TurtleBot has a panorama application that reproduces the iPhone's 360° panorama feature. The official tutorial uses the Create base and a Kinect; when using a Roomba base and an Xtion Pro Live instead, the tutorial does not work out of the box. 1. Start: roslaunch turtlebot_bringup minimal.launch (loads the wheel driver). Then open another shell window and run: rosservice call turtlebot_panorama/take_pano 0 360.0 30.0 0.3 (commands the camera to spin). At this point it turned out that the Roomba-driv…
To prepare ROS and OpenNI2, see my previous post. Download the source code of pointcloud_to_laserscan (https://github.com/ros-perception/pointcloud_to_laserscan), compile it, then launch the sample_node.launch file below (of course, …
…supplier for the first-generation Kinect, located in Israel, and also the maintainer of the open-source development framework OpenNI. Since being acquired by Apple, OpenNI has ceased to be updated. Apple is expected to make a move in the near future, possibly in combination with a television or game console, to change the world again. The ASUS Xtion is built on a licensed PrimeSense chip; once PrimeSense stops supplying chips in the near future, the market will no longer be able to buy…
: used to switch between standard mode and Android mode, for Android application development.
3. Kinect
Regarding the history of the Kinect: it was launched on November 4, 2010 and sold 8 million units in its first 60 days, entering the Guinness Book of Records as "the fastest-selling consumer electronics product in history". Microsoft had announced the project as Project Natal on June 1, 2009 and renamed it Kinect on June 13, 2010, but did not provide any drivers. In November 2010, a $3,000 bounty was awarded to Héctor Mart…
later
Full-body tracking:
Computes positions for the joints only, not rotations;
Tracks only the full body; there is no upper-body or hands-only mode;
Seems to consume more CPU power than OpenNI/NITE (not properly benchmarked);
No gesture recognition system;
No support for the PrimeSense and ASUS WAVI Xtion sensors (can anyone confirm this?);
Only supports Windows 7 (x86 and x64);
No support for the Unity3D game engine;
No built-in support for record/playback to disk;
No …
…library settings that are required at compile time:
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include
Version: 2.2.0.0
Cflags: -I${includedir}
Libs: -L${libdir} -lOpenNI2
With this file, pkg-config can output the compiler and linker flags and the version information of the installed package. Check that it is found correctly:
$ pkg-config --modversion libopenni2
If the version number is 2.2.0.0, everything is fine. 3. Testing. Next, tes…
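To see how pkg-config turns such a .pc file into build flags, here is a self-contained sketch: it writes a stand-in libopenni2.pc into a temporary directory (the contents and /opt paths are illustrative assumptions, not your actual install) and queries it:

```shell
# Create a stand-in .pc file; the install prefix is illustrative, not real.
mkdir -p /tmp/pcdemo
cat > /tmp/pcdemo/libopenni2.pc <<'EOF'
prefix=/opt/openni2
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: OpenNI2
Description: Stand-in pc file for demonstration
Version: 2.2.0.0
Cflags: -I${includedir}
Libs: -L${libdir} -lOpenNI2
EOF

# Ask pkg-config for the version and the build flags it would emit.
PKG_CONFIG_PATH=/tmp/pcdemo pkg-config --modversion libopenni2      # prints 2.2.0.0
PKG_CONFIG_PATH=/tmp/pcdemo pkg-config --cflags --libs libopenni2
```

In a real build you would then compile with something like `g++ main.cpp $(pkg-config --cflags --libs libopenni2)`.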
Preface
Recently, while learning SLAM, I started working with ROS Indigo and a Kinect. In principle, documentation for ROS Indigo should be plentiful, but for the Kinect the methods found online are a mixed bag. After installing by the officially recommended ROS method, I still could not obtain the fabled depth image, so I began digging through all sorts of posts and trying all sorts of installs... it was really hard... fina…
…kits. 1) Official SDK. Strengths: provides audio support, control of the adjustable tilt motor, and skeleton tracking, including non-standard postures (for example the "surrender" posture, compared with OpenNI). Details such as head, hand, foot, and clavicle detection and joint occlusion are handled more carefully (though the accuracy is undetermined). It also supports multiple sensors (multiple Kinects). Disadvantages: Microsoft restricts it to non-commercial use. In addition, gesture recognition and tracking are not ava…
v1.5). Joints can only be tracked for the whole body; there are no specific tracking modes such as hands-only or upper-body-only;
Compared with OpenNI/NITE, it seems to consume more CPU (not properly benchmarked);
No gesture recognition system is included;
The PrimeSense and ASUS WAVI Xtion hardware platforms are not supported; only Windows 7 (32-bit and 64-bit) is supported;
The Unity3D game engine is not supported;
Data record or playback to hard disk is not supported;
Original blog; please cite the source when reprinting: http://www.cnblogs.com/zxouxuewei/ Prerequisites: 1. This tutorial assumes that you have successfully installed the Kinect or Xtion depth camera driver and can use the camera properly. For driver installation, see my post http://www.cnblogs.com/zxouxuewei/p/5271939.html 2. You already have a platform that can be moved manually or automatically, with the depth camera actually mounted on the mobile platform…
In machine vision, the depth image produced by a 3D camera typically requires registration to generate a registered depth image. The purpose of registration is to make the depth map and the color image coincide, i.e., to convert coordinates from the depth image's coordinate system into the color image's coordinate system. Let us walk through the derivation. For convenience, first make some simple assumptions. The left c…
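The conversion can be sketched numerically. Below is a minimal Python/NumPy illustration under assumed notation: K_d and K_c are the depth and color camera intrinsic matrices, and (R, t) is the rigid transform from the depth frame to the color frame. All calibration values are made-up examples, not real device parameters:

```python
import numpy as np

# Assumed example calibration (not from a real device):
K_d = np.array([[570.0, 0.0, 320.0],
                [0.0, 570.0, 240.0],
                [0.0,   0.0,   1.0]])   # depth camera intrinsics
K_c = np.array([[525.0, 0.0, 319.5],
                [0.0, 525.0, 239.5],
                [0.0,   0.0,   1.0]])   # color camera intrinsics
R = np.eye(3)                           # assume no rotation between the cameras
t = np.array([0.025, 0.0, 0.0])         # 25 mm horizontal baseline (metres)

def register_pixel(u_d, v_d, z):
    """Map depth pixel (u_d, v_d) with depth z (metres) into color image coords."""
    # 1) back-project the depth pixel into a 3-D point in the depth camera frame
    p_d = z * np.linalg.inv(K_d) @ np.array([u_d, v_d, 1.0])
    # 2) transform the point into the color camera frame
    p_c = R @ p_d + t
    # 3) project into the color image plane and dehomogenize
    uvw = K_c @ p_c
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

u_c, v_c = register_pixel(320.0, 240.0, 1.0)
print(round(u_c, 3), round(v_c, 3))  # -> 332.625 239.5
```

In practice this mapping is applied to every valid depth pixel, producing a depth image expressed in the color camera's coordinate system.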
…projector to project a coded beam onto a target object, generating feature points; the distance between the camera's optical center and each feature point is then calculated from the projection mode and the geometric pattern of the projected light, thereby obtaining depth information for the feature points and enabling model reconstruction. This kind of coded beam is structured light, which comes in a variety of specific patterns: points, lines, polygons, and so on. The stru…
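In the simplest pinhole model, the distance calculation described above reduces to triangulation over the projector-camera baseline: Z = f * b / d, where f is the focal length in pixels, b the baseline, and d the observed disparity of a projected feature point. The numbers below are illustrative assumptions, not real sensor parameters:

```python
# Simplified triangulation sketch (not actual device firmware):
# depth Z = f * b / d, with f in pixels, baseline b in metres,
# and disparity d in pixels.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Return depth in metres from an observed feature-point disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Example: f = 580 px, baseline = 7.5 cm, disparity = 29 px
print(depth_from_disparity(580.0, 0.075, 29.0))  # -> 1.5 (metres)
```

Note that depth falls off with disparity, so nearby objects (large disparity) are measured more precisely than distant ones, a property shared by all triangulation-based depth sensors.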
ROS (Fuerte) + rgbdslam_freiburg installation. 1. Download the Fuerte version of rgbdslam_freiburg. Google's server can no longer be reached (unless you get around the firewall), so download the package from ros.org instead: http://www.ros.org/browse/list.php?package_type=package&distro=fuerte Note that the package needs to be downloaded into your ROS workspace. 2. Install dependencies:
$ sudo apt-get install libglew1.6-dev libdevil-dev libsuitesparse-dev
$ sudo apt-get install ros-fuerte-octomap-mapp…
Recently I have been studying RGB-D SLAM and collected two open-source RGB-D mapping toolkits. 1. rgbdslam2. A. Installation:
source /opt/ros/indigo/setup.bash
mkdir -p ~/rgbdslam_catkin_ws/src
cd ~/rgbdslam_catkin_ws/src
git clone https://github.com/felixendres/rgbdslam_v2.git
cd ~/rgbdslam_catkin_ws
catkin_make
source devel/setup.bash
B. How to run. Start the camera:
roslaunch openni2_launch openni2.launch
If you use the…